Most previous learning-based graph matching algorithms obtain suboptimal solutions to the quadratic assignment problem (QAP) by dropping one or more of the matching constraints and adopting a relaxed assignment solver. Such relaxation may actually weaken the original graph matching problem and in turn harm the matching performance. In this paper, we propose a deep-learning-based graph matching framework that works on the original QAP without compromising the matching constraints. In particular, we design an affinity-assignment prediction network that jointly learns the pairwise affinities and estimates the node assignments, and we develop a differentiable solver inspired by a probabilistic perspective on the pairwise affinities. Aiming at better matching results, the probabilistic solver refines the estimated assignments in an iterative fashion to impose both the discrete and one-to-one matching constraints. The proposed method is evaluated on three popularly tested benchmarks (Pascal VOC, Willow Object, and SPair-71k) and outperforms all previous state-of-the-art methods on all benchmarks.
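The iterative refinement described above can be pictured with a standard Sinkhorn-style normalization followed by a greedy rounding step. This is a generic sketch of imposing doubly-stochastic (one-to-one) and discrete constraints, not the authors' probabilistic solver; all function names are hypothetical.

```python
import numpy as np

def sinkhorn_refine(scores, n_iters=50):
    """Alternately normalize rows and columns so the soft assignment
    approaches a doubly-stochastic matrix (the one-to-one constraint)."""
    P = np.exp(scores)
    for _ in range(n_iters):
        P = P / P.sum(axis=1, keepdims=True)  # rows sum to 1
        P = P / P.sum(axis=0, keepdims=True)  # columns sum to 1
    return P

def discretize(P):
    """Greedily round the refined soft assignment to a permutation
    matrix, enforcing the discrete constraint."""
    X, Q = np.zeros_like(P), P.copy()
    for _ in range(P.shape[0]):
        i, j = np.unravel_index(np.argmax(Q), Q.shape)
        X[i, j] = 1.0
        Q[i, :], Q[:, j] = -np.inf, -np.inf
    return X

rng = np.random.default_rng(0)
P = sinkhorn_refine(rng.normal(size=(4, 4)))
X = discretize(P)
```

Each Sinkhorn iteration pushes the soft assignment toward the set of doubly-stochastic matrices; the greedy rounding then returns a hard one-to-one assignment.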
In recent years, differentiable solvers for the linear assignment problem (LAP) have attracted much research attention and are usually embedded into learning frameworks as components. However, previous algorithms, with or without learning strategies, usually suffer from degraded optimality as the problem size increases. In this paper, we propose a learnable linear assignment solver based on deep graph networks. Specifically, we first transform the cost matrix into a bipartite graph and convert the assignment task into the problem of selecting reliable edges from the constructed graph. Subsequently, a deep graph network is developed to aggregate and update the features of nodes and edges. Finally, the network predicts a label for each edge that indicates the assignment relationship. Experimental results on a synthetic dataset reveal that our method outperforms state-of-the-art baselines and achieves consistently high accuracy as the problem size increases. Furthermore, we also embed the proposed solver, together with state-of-the-art baseline solvers for comparison, into a popular multi-object tracking (MOT) framework to train the tracker in an end-to-end manner. Experimental results on MOT benchmarks demonstrate that the proposed LAP solver improves the tracker by a large margin.
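The bipartite edge-selection view of the LAP described above can be illustrated with a tiny brute-force sketch: every row-column pair of the cost matrix becomes an edge, and solving the LAP amounts to labeling each edge as selected or not. The deep-graph-network solver itself is not reproduced here; all names are illustrative.

```python
import itertools
import numpy as np

def lap_bruteforce(cost):
    """Exact LAP by enumeration (tiny instances only): find the
    row-to-column permutation with minimal total cost."""
    n = cost.shape[0]
    best_perm, best_cost = None, np.inf
    for perm in itertools.permutations(range(n)):
        c = cost[range(n), perm].sum()
        if c < best_cost:
            best_cost, best_perm = c, perm
    return best_perm, best_cost

def cost_to_bipartite_edges(cost):
    """View the cost matrix as a complete bipartite graph: one edge
    (row, column, cost) per entry; a learned solver's job is then to
    label each edge as selected (1) or not (0)."""
    n, m = cost.shape
    return [(i, j, float(cost[i, j])) for i in range(n) for j in range(m)]

cost = np.array([[4.0, 1.0, 3.0],
                 [2.0, 0.0, 5.0],
                 [3.0, 2.0, 2.0]])
edges = cost_to_bipartite_edges(cost)
perm, total = lap_bruteforce(cost)
# Ground-truth edge labels induced by the optimal assignment.
labels = {(i, j): int(perm[i] == j) for i, j, _ in edges}
```

Here the optimal assignment sends rows 0, 1, 2 to columns 1, 0, 2 with total cost 5, and exactly one edge per row and per column receives label 1.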
In this work, we focus on Interactive Human Parsing (IHP), which aims to segment a human image into multiple human body parts with guidance from users' interactions. This new task inherits the class-aware property of human parsing, which cannot be well solved by traditional interactive image segmentation approaches that are generally class-agnostic. To tackle this new task, we first exploit user clicks to identify different human parts in the given image. These clicks are subsequently transformed into semantic-aware localization maps, which are concatenated with the RGB image to form the input of the segmentation network and generate the initial parsing result. To enable the network to better perceive users' purposes during the correction process, we investigate several principal ways of refinement and reveal that random-sampling-based click augmentation is the best way to promote the correction effectiveness. Furthermore, we also propose a semantic-perceiving loss (SP loss) to augment the training, which effectively exploits the semantic relationships of clicks for better optimization. To the best of our knowledge, this work is the first attempt to tackle the human parsing task under the interactive setting. Our IHP solution achieves 85% mIoU on the benchmark LIP and 75% mIoU on Helen, and is also evaluated on PASCAL-Person-Part and CIHP, requiring only 1.95, 3.02, 2.84, and 3.09 clicks per class, respectively. These results demonstrate that we can obtain high-quality human parsing masks with only a few human clicks. We hope this work can motivate more researchers to develop data-efficient solutions for IHP in the future.
In recent years, powered by the discriminative representations learned via graph neural network (GNN) models, deep graph matching methods have made great progress on the task of matching semantic features. However, these methods usually rely on heuristically generated graph patterns, which may introduce unreliable relationships and harm the matching performance. In this paper, we propose a joint graph learning and matching network, named GLAM, to explore reliable graph structures for boosting graph matching. GLAM adopts a pure attention-based framework for both graph learning and graph matching. Specifically, it employs two types of attention mechanisms, self-attention and cross-attention, for the task. The self-attention discovers the relationships between features and further updates the feature representations over the learned structures, while the cross-attention computes cross-graph correlations between the two feature sets to be matched for feature reconstruction. Moreover, the final matching solution is directly derived from the output of the cross-attention layer, without employing a specific matching decision module. The proposed method is evaluated on three popular visual matching benchmarks (Pascal VOC, Willow Object, and SPair-71k), and it outperforms previous state-of-the-art graph matching methods by significant margins on all benchmarks. Furthermore, the graph patterns learned by our model are validated to be able to remarkably enhance previous deep graph matching methods by replacing their handcrafted graph structures with the learned ones.
This paper revisits a fundamental problem in statistical inference from a non-asymptotic theoretical viewpoint – the construction of confidence sets. We establish a finite-sample bound for the estimator, characterizing its asymptotic behavior in a non-asymptotic fashion. An important feature of our bound is that its dimension dependency is captured by the effective dimension – the trace of the limiting sandwich covariance – which can be much smaller than the parameter dimension in some regimes. We then illustrate how the bound can be used to obtain a confidence set whose shape is adapted to the optimization landscape induced by the loss function. Unlike previous works that rely heavily on the strong convexity of the loss function, we only assume the Hessian is lower bounded at the optimum and allow it to gradually become degenerate. This property is formalized by the notion of generalized self-concordance, which originated in convex optimization. Moreover, we demonstrate how the effective dimension can be estimated from data and characterize its estimation accuracy. We apply our results to maximum likelihood estimation with generalized linear models, score matching with exponential families, and hypothesis testing with Rao's score test.
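As a toy numerical illustration of the effective dimension (the trace of the sandwich covariance H⁻¹GH⁻¹, with H the Hessian of the loss at the optimum and G the gradient covariance), the following NumPy sketch estimates it for well-specified least squares with unit noise, where it should be close to the parameter dimension. Variable names are ours, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 5000, 10
X = rng.normal(size=(n, d))
theta_star = np.zeros(d)                  # true parameter (well-specified)
y = X @ theta_star + rng.normal(size=n)   # unit-variance noise

# Squared loss: per-sample gradient at theta_star is -(y_i - x_i' theta) x_i.
grads = -(y - X @ theta_star)[:, None] * X
H = X.T @ X / n                       # empirical Hessian of the loss
G = grads.T @ grads / n               # empirical gradient covariance
sandwich = np.linalg.inv(H) @ G @ np.linalg.inv(H)
eff_dim = np.trace(sandwich)          # effective dimension estimate
```

In this well-specified setting G ≈ H, so the sandwich is close to H⁻¹ and its trace is close to d = 10; under misspecification or heteroscedastic noise the two can differ substantially.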
Generative AI has matured to a point where large-scale models can generate text that seems indistinguishable from human-written text and remarkably photorealistic images. Automatically measuring how close the distribution of generated data is to the target real data distribution is a key step in diagnosing existing models and developing better models. We present MAUVE, a family of comparison measures between pairs of distributions such as those encountered in the generative modeling of text or images. These scores are statistical summaries of divergence frontiers capturing two types of errors in generative modeling. We explore four approaches to statistically estimate these scores: vector quantization, non-parametric estimation, classifier-based estimation, and parametric Gaussian approximations. We provide statistical bounds for the vector quantization approach. Empirically, we find that the proposed scores paired with a range of $f$-divergences and statistical estimation methods can quantify the gaps between the distributions of human-written text and those of modern neural language models by correlating with human judgments and identifying known properties of the generated texts. We conclude the paper by demonstrating its applications to other AI domains and discussing practical recommendations.
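The divergence-frontier idea behind these scores can be sketched with already-quantized (histogram) distributions: trace the curve of KL divergences from each distribution to the mixtures R = λP + (1−λ)Q as λ varies. This is a simplified illustration of the quantization-based estimate, not the MAUVE implementation.

```python
import numpy as np

def kl(p, q):
    """KL divergence between two discrete distributions on a shared support."""
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

def divergence_frontier(p, q, num_lambdas=25):
    """Trace the frontier of (KL(Q||R), KL(P||R)) over mixtures
    R = lam*P + (1-lam)*Q; each point captures a trade-off between
    the two types of generative-modeling error."""
    pts = []
    for lam in np.linspace(0.01, 0.99, num_lambdas):
        r = lam * p + (1 - lam) * q
        pts.append((kl(q, r), kl(p, r)))
    return pts

# Toy "quantized" distributions: histograms over 4 shared bins
# (in practice the bins come from vector-quantizing embeddings).
p = np.array([0.4, 0.3, 0.2, 0.1])
q = np.array([0.1, 0.2, 0.3, 0.4])
frontier = divergence_frontier(p, q)
```

A scalar score can then be obtained by summarizing the frontier, e.g., via the area under the exponentiated-divergence curve; identical distributions collapse the frontier to the origin.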
Kernels are efficient in representing nonlocal dependence and they are widely used to design operators between function spaces. Thus, learning kernels in operators from data is an inverse problem of general interest. Due to the nonlocal dependence, the inverse problem can be severely ill-posed with a data-dependent singular inversion operator. The Bayesian approach overcomes the ill-posedness through a non-degenerate prior. However, a fixed non-degenerate prior leads to a divergent posterior mean when the observation noise becomes small, if the data induces a perturbation in the eigenspace of zero eigenvalues of the inversion operator. We introduce a data-adaptive prior to achieve a stable posterior whose mean always has a small noise limit. The data-adaptive prior's covariance is the inversion operator with a hyper-parameter selected adaptive to data by the L-curve method. Furthermore, we provide a detailed analysis on the computational practice of the data-adaptive prior, and demonstrate it on Toeplitz matrices and integral operators. Numerical tests show that a fixed prior can lead to a divergent posterior mean in the presence of any of the four types of errors: discretization error, model error, partial observation and wrong noise assumption. In contrast, the data-adaptive prior always attains posterior means with small noise limits.
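A minimal numerical sketch of hyper-parameter selection in a Tikhonov-style setting on a Toeplitz operator, using a crude stand-in for the L-curve corner (the point closest to the origin in log-log coordinates of residual norm versus solution norm). This illustrates the mechanics only and is not the paper's data-adaptive prior.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 40
# Smoothing Toeplitz operator and noisy data for an ill-posed-style inversion.
A = np.array([[np.exp(-abs(i - j) / 4.0) for j in range(n)] for i in range(n)])
x_true = np.sin(np.linspace(0, 3 * np.pi, n))
b = A @ x_true + 1e-3 * rng.normal(size=n)

lams = np.logspace(-8, 0, 40)
res_norms, sol_norms, sols = [], [], []
for lam in lams:
    # Tikhonov-regularized solution: a generic stand-in for a posterior
    # mean under a Gaussian prior whose strength is set by lam.
    x = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)
    sols.append(x)
    res_norms.append(np.linalg.norm(A @ x - b))
    sol_norms.append(np.linalg.norm(x))

# Crude L-curve heuristic: pick the point closest to the origin in
# log-log coordinates (proper L-curve methods locate maximum curvature).
scores = np.hypot(np.log(res_norms), np.log(sol_norms))
best = int(np.argmin(scores))
x_hat = sols[best]
```

The L-curve balances data fit against solution size; as the abstract notes, the paper's prior goes further by making the covariance itself data-adaptive rather than a fixed multiple of the identity.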
We present 3D Highlighter, a technique for localizing semantic regions on a mesh using text as input. A key feature of our system is the ability to interpret "out-of-domain" localizations. Our system demonstrates the ability to reason about where to place non-obviously related concepts on an input 3D shape, such as adding clothing to a bare 3D animal model. Our method contextualizes the text description using a neural field and colors the corresponding region of the shape using a probability-weighted blend. Our neural optimization is guided by a pre-trained CLIP encoder, which bypasses the need for any 3D datasets or 3D annotations. Thus, 3D Highlighter is highly flexible, general, and capable of producing localizations on a myriad of input shapes. Our code is publicly available at https://github.com/threedle/3DHighlighter.
Spectral risk objectives - also called $L$-risks - allow for learning systems to interpolate between optimizing average-case performance (as in empirical risk minimization) and worst-case performance on a task. We develop stochastic algorithms to optimize these quantities by characterizing their subdifferential and addressing challenges such as biasedness of subgradient estimates and non-smoothness of the objective. We show theoretically and experimentally that out-of-the-box approaches such as stochastic subgradient and dual averaging are hindered by bias and that our approach outperforms them.
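A spectral risk is a weighted average of the sorted losses, with nonnegative, nondecreasing weights putting more mass on the worst cases: uniform weights recover the empirical mean, while weights concentrated on the top tail recover the superquantile (CVaR). A minimal NumPy sketch of the objective itself, not the paper's stochastic optimizer:

```python
import numpy as np

def spectral_risk(losses, weights):
    """Weighted average of sorted losses; `weights` must be
    nonnegative, nondecreasing, and sum to 1."""
    return float(np.sort(losses) @ weights)

losses = np.array([1.0, 5.0, 2.0, 4.0])
uniform = np.full(4, 0.25)                   # recovers the empirical mean
cvar_top2 = np.array([0.0, 0.0, 0.5, 0.5])   # CVaR on the worst half
mean_risk = spectral_risk(losses, uniform)   # (1+2+4+5)/4 = 3.0
worst_risk = spectral_risk(losses, cvar_top2)  # (4+5)/2 = 4.5
```

The non-smoothness and biased subgradient issues the abstract mentions stem from the sorting step: minibatch order statistics are biased estimates of the full-sample ones.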
Neuron reconstruction from raw Optical Microscopy (OM) image stacks is a basis of neuroscience. Manual annotation and semi-automatic neuron tracing algorithms are time-consuming and inefficient. Existing deep learning neuron reconstruction methods, although demonstrating exemplary performance, rely heavily on complex rule-based components. Therefore, a crucial challenge is designing an end-to-end neuron reconstruction method that makes the overall framework simpler and model training easier. We propose a Neuron Reconstruction Transformer (NRTR) that, discarding the complex rule-based components, views neuron reconstruction as a direct set-prediction problem. To the best of our knowledge, NRTR is the first image-to-set deep learning model for end-to-end neuron reconstruction. In experiments using the BigNeuron and VISoR-40 datasets, NRTR achieves excellent neuron reconstruction results on comprehensive benchmarks and outperforms competitive baselines. Results of extensive experiments indicate that NRTR is effective at treating neuron reconstruction as a set-prediction problem, which makes end-to-end model training feasible.